Assessing significance in a Markov chain without mixing.

Authors

  • Maria Chikina
  • Alan Frieze
  • Wesley Pegden
Abstract

We present a statistical test to detect that a presented state of a reversible Markov chain was not chosen from a stationary distribution. In particular, given a value function for the states of the Markov chain, we would like to show rigorously that the presented state is an outlier with respect to the values, by establishing a p value under the null hypothesis that it was chosen from a stationary distribution of the chain. A simple heuristic used in practice is to sample ranks of states from long random trajectories on the Markov chain and compare these with the rank of the presented state; if the presented state is a 0.1% outlier compared with the sampled ranks (its rank is in the bottom 0.1% of sampled ranks), then this observation should correspond to a p value of 0.001. This significance is not rigorous, however, without good bounds on the mixing time of the Markov chain. Our test is the following: Given the presented state in the Markov chain, take a random walk from the presented state for any number of steps. We prove that observing that the presented state is an ε-outlier on the walk is significant at p = √(2ε) under the null hypothesis that the state was chosen from a stationary distribution. We assume nothing about the Markov chain beyond reversibility and show that significance at √ε is best possible in general. We illustrate the use of our test with a potential application to the rigorous detection of gerrymandering in Congressional districting.
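
As a rough illustration of the test described in the abstract, the following Python sketch runs a random walk from the presented state, computes the fraction ε of visited states whose value is at most the presented state's value, and reports the paper's √(2ε) significance level. The callables step and value and the parameter n_steps are placeholders to be supplied by the user; they are not part of the paper's code.

```python
import math
import random


def sqrt_two_eps_test(presented_state, step, value, n_steps, rng=random):
    """Random-walk outlier test; returns (epsilon, p) with p = sqrt(2 * epsilon).

    Assumptions: `step(state, rng)` performs one transition of the reversible
    Markov chain and `value(state)` returns the label used to rank states.
    """
    v0 = value(presented_state)
    observed = [v0]
    state = presented_state
    for _ in range(n_steps):
        state = step(state, rng)
        observed.append(value(state))
    # epsilon: fraction of states on the walk (including the presented state)
    # whose value is at most the presented state's value.
    eps = sum(1 for v in observed if v <= v0) / len(observed)
    # Under the null hypothesis that the presented state was drawn from the
    # stationary distribution, this observation is significant at sqrt(2*eps).
    return eps, math.sqrt(2.0 * eps)
```

For example, in the districting application one would pass a presented plan, a single redistricting move as step, and a partisanship score as value; a walk on which the plan's value ranks in the bottom 0.005 of visited states would then give p = √(2 × 0.005) = 0.1.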

Similar Articles

Financial Risk Modeling with Markov Chain

Investors use different approaches to select an optimal portfolio, so optimal investment choices based on return can be interpreted through different models. The traditional approach to portfolio selection is known as the mean-variance model. Another approach is the Markov chain. A Markov chain is a random process without memory, which means that the conditional probability distribution of the nex...
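
To make the "random process without memory" description concrete, here is a minimal two-state sketch (the states and transition probabilities below are invented for illustration): the distribution of the next state depends only on the current state, never on the earlier path.

```python
import random

# Hypothetical one-step transition probabilities: P[current][next].
P = {"up":   {"up": 0.6, "down": 0.4},
     "down": {"up": 0.3, "down": 0.7}}


def next_state(current, rng=random):
    """Sample the next state from P[current]; no history is consulted."""
    r = rng.random()
    cumulative = 0.0
    for state, prob in P[current].items():
        cumulative += prob
        if r < cumulative:
            return state
    return state  # guard against floating-point rounding
```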

Full text

Analysis of the Spell of Rainy Days in Lake Urmia Basin using Markov Chain Model

In this study, the frequency and the spell of rainy days were analyzed in the Lake Urmia basin using a Markov chain model. For this purpose, the daily precipitation data of 7 synoptic stations in the Lake Urmia basin were used for the period 1995-2014. The daily precipitation data at each station were classified into wet and dry states, and the fitness of a first-order Markov chain to the data series was e...
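
A minimal sketch of fitting such a first-order wet/dry chain from a daily precipitation series; the wet-day threshold and the function name below are assumptions for illustration, not taken from the study.

```python
def fit_wet_dry_chain(daily_precip_mm, wet_threshold=0.1):
    """Estimate first-order wet/dry transition probabilities from daily data."""
    states = ["wet" if p >= wet_threshold else "dry" for p in daily_precip_mm]
    counts = {"wet": {"wet": 0, "dry": 0}, "dry": {"wet": 0, "dry": 0}}
    for today, tomorrow in zip(states, states[1:]):
        counts[today][tomorrow] += 1
    # Normalise each row of transition counts into conditional probabilities.
    return {
        today: {s: (c / sum(row.values()) if sum(row.values()) else 0.0)
                for s, c in row.items()}
        for today, row in counts.items()
    }
```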

Full text

Fast and Slim Lifted Markov Chains

The Metropolis-Hastings method allows for designing a reversible Markov chain P on a given graph G for a target stationary distribution π. Such a Markov chain may suffer from a slow mixing time due to reversibility. Diaconis, Holmes, and Neal (1997) for the ring-like chain P, and later Chen, Lovász, and Pak (2002) for an arbitrary chain P, provided an explicit construction of a non-reversible Markov...
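
A minimal sketch of the Metropolis-Hastings construction mentioned above: propose a uniform neighbour in the graph and accept with a ratio that includes the degree correction, so the resulting reversible chain has π as its stationary distribution. The callables neighbours and pi are placeholders, not taken from the cited papers.

```python
import random


def metropolis_hastings_step(x, neighbours, pi, rng=random):
    """One reversible Metropolis-Hastings step on a graph.

    Assumptions: `neighbours(x)` returns the vertices adjacent to x and
    `pi(x)` is the (possibly unnormalised) target weight of x.
    """
    nbrs = neighbours(x)
    y = rng.choice(nbrs)
    # Hastings ratio for a uniform-neighbour proposal:
    # pi(y) * q(y -> x) / (pi(x) * q(x -> y)) = pi(y) * deg(x) / (pi(x) * deg(y)).
    accept = min(1.0, (pi(y) * len(nbrs)) / (pi(x) * len(neighbours(y))))
    return y if rng.random() < accept else x
```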

Full text

Sampling Good Motifs with Markov Chains

Markov chain Monte Carlo (MCMC) techniques have been used with some success in bioinformatics [LAB93]. However, these results rely on heuristic estimates of a Markov chain's mixing time. Without provable results, we cannot accurately judge the quality of an algorithm or the output it produces. Our goal is to remedy this situation in the context of the motif-finding problem. Using combinatoria...

Full text

Journal:
  • Proceedings of the National Academy of Sciences of the United States of America

Volume 114, Issue 11

Pages: -

Publication date: 2017